
    Feeling the Shape: Active Exploration Behaviors for Object Recognition With a Robotic Hand

    Autonomous exploration is a crucial feature for robust and safe robotic systems capable of interacting with and recognizing their surrounding environment. In this paper, we present a method for object recognition using a three-fingered robotic hand that actively explores interesting object locations to reduce uncertainty. We present a novel probabilistic perception approach with a Bayesian formulation to iteratively accumulate evidence from robot touch. Exploration of better locations for perception is performed by familiarity and novelty exploration behaviors, which intelligently move the robot hand toward locations with low and high levels of interestingness, respectively. These active behaviors, similar to the exploratory procedures observed in humans, allow robots to autonomously explore locations they believe contain information useful for recognition. The active behaviors are validated with object recognition experiments in both offline and real-time modes, and their effects are further analyzed by inhibiting them in favor of a passive exploration strategy. The results demonstrate not only the accuracy of our proposed methods but also their benefits for active robot control to intelligently explore and interact with the environment.
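    The iterative Bayesian accumulation of evidence described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the candidate objects, the per-tap likelihood values and the decision threshold are all invented for the example.

```python
import numpy as np

def bayes_update(prior, likelihood):
    """One recursive Bayesian update: posterior ∝ likelihood × prior."""
    posterior = likelihood * prior
    return posterior / posterior.sum()

# Hypothetical likelihoods P(tactile reading | object) for 3 candidate objects.
objects = ["ball", "box", "bottle"]
taps = [np.array([0.6, 0.3, 0.1]),   # each contact weakly favours "ball"
        np.array([0.5, 0.3, 0.2]),
        np.array([0.7, 0.2, 0.1])]

belief = np.full(3, 1 / 3)           # uniform prior over objects
threshold = 0.9                      # stop once belief is confident enough
for lik in taps:
    belief = bayes_update(belief, lik)
    if belief.max() > threshold:     # evidence accumulated: make a decision
        break

print(objects[int(belief.argmax())], round(float(belief.max()), 3))
```

    Accumulating evidence over several contacts in this way lets a weakly informative sensor still reach a confident decision, which is the core of the recognition loop the abstract describes.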

    MIRO: A robot “Mammal” with a biomimetic brain-based control system

    We describe the design of a novel commercial biomimetic brain-based robot, MIRO, developed as a prototype robot companion. The MIRO robot is animal-like in several aspects of its appearance; however, it is also biomimetic in a more significant way, in that its control architecture mimics some of the key principles underlying the design of the mammalian brain as revealed by neuroscience. Specifically, MIRO builds on decades of previous work in developing robots with brain-based control systems, using a layered control architecture alongside centralized mechanisms for integration and action selection. MIRO’s control system operates across three core processors, P1-P3, that mimic aspects of spinal cord, brainstem, and forebrain functionality respectively. Whilst designed as a versatile prototype for next-generation companion robots, MIRO also provides developers and researchers with a new platform for investigating the potential advantages of brain-based control.

    Adaptive perception: learning from sensory predictions to extract object shape with a biomimetic fingertip

    In this work, we present an adaptive perception method that improves the accuracy and speed of a tactile exploration task, extending our previous studies on sensorimotor control strategies for active tactile perception in robotics. First, we present an active Bayesian perception method that actively repositions a robot to accumulate evidence from better locations and reduce uncertainty. Second, we describe an adaptive perception method that, based on a forward model and a predicted information gain approach, allows the robot to analyse `what would have happened' if a different decision had been made at a previous decision time. This approach allows the active Bayesian perception process to adapt, improving the accuracy and reaction time of the exploration task. Our methods are validated with a contour-following exploratory procedure using a touch sensor. The results show that the adaptive perception method allows the robot to make sensory predictions and autonomously adapt, improving the performance of the exploration task.
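    The predicted information gain idea, choosing where to sense next by the expected reduction in posterior entropy, can be sketched roughly as follows. This is a simplified illustration with invented sensor models, not the method from the paper.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def expected_info_gain(belief, lik):
    """lik[o, z] = P(observation z | object o) at a candidate location.
    Returns the expected entropy reduction over hypothetical observations z."""
    h_now = entropy(belief)
    p_z = belief @ lik                       # predictive distribution P(z)
    h_next = 0.0
    for z in range(lik.shape[1]):
        post = belief * lik[:, z]
        post /= post.sum()
        h_next += p_z[z] * entropy(post)     # entropy averaged over outcomes
    return h_now - h_next

belief = np.array([0.5, 0.5])                # two candidate shapes
# Hypothetical sensor models at two candidate sensing locations:
loc_a = np.array([[0.5, 0.5], [0.5, 0.5]])   # uninformative location
loc_b = np.array([[0.9, 0.1], [0.2, 0.8]])   # discriminative location
gains = [expected_info_gain(belief, L) for L in (loc_a, loc_b)]
best = int(np.argmax(gains))                 # selects the discriminative location
```

    The uninformative location yields zero expected gain, so a controller ranking locations this way moves the sensor where observations are expected to discriminate best between hypotheses.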

    Expressive touch: Control of robot emotional expression by touch

    In this paper, we present work on the control of robot emotional expression using touch sensing. A tactile Bayesian framework is proposed for recognizing different types of touch gestures, together with a sequential analysis method that accumulates evidence from tactile interaction to achieve accurate touch recognition. Input data for our method are obtained from touch sensing, an important modality for social robotics. Here, emotions in the robot platform are represented by facial expressions, which are handled by a purpose-built control architecture. We validate our method with experiments on tactile interaction in simulated and real robot environments. The results demonstrate that our proposed method is suitable and accurate for controlling robot emotions through tactile interaction with humans, and highlight the potential of touch as a non-verbal communication channel for the development of social robots capable of interacting with humans.

    Machines Learning - Towards a New Synthetic Autobiographical Memory

    Autobiographical memory is the organisation of episodes and contextual information from an individual’s experiences into a coherent narrative, which is key to our sense of self. Formation and recall of autobiographical memories is essential for effective, adaptive behaviour in the world, providing contextual information necessary for planning actions and memory functions such as event reconstruction. A synthetic autobiographical memory system would endow intelligent robotic agents with many essential components of cognition through active compression and storage of historical sensorimotor data in an easily addressable manner. Current approaches neither fulfil these functional requirements nor build upon recent understanding of predictive coding, deep learning, or the neurobiology of memory. This position paper highlights desiderata for a modern implementation of synthetic autobiographical memory based on human episodic memory, and proposes that a recently developed model of hippocampal memory could be extended into a generalised model of autobiographical memory. Initial implementation will be targeted at social interaction, where current synthetic autobiographical memory systems have had success.

    Multisensory wearable interface for immersion and telepresence in robotics

    The idea of being present in a remote location has inspired researchers to develop robotic devices that allow humans to experience a feeling of telepresence. These devices require multiple forms of sensory feedback to provide a realistic telepresence experience. In this work, we develop a wearable interface for immersion and telepresence that provides humans with the capability both to receive multisensory feedback from vision, touch and audio and to remotely control a robot platform. Multimodal feedback from the remote environment is based on the integration of sensor technologies coupled to the sensory system of the robot platform. Remote control of the robot is achieved through a modularised architecture that allows the user to visually explore the remote environment. We validated our work with multiple experiments in which participants, located at different venues, successfully controlled the robot platform while visually exploring, touching and listening to a remote environment. In our experiments we used two different robotic platforms: the iCub humanoid robot and the Pioneer LX mobile robot. These experiments show that our wearable interface is comfortable, easy to use and adaptable to different robotic platforms. Furthermore, we observed that our approach allows humans to experience a vivid feeling of being present in a remote environment.

    Active Control for Object Perception and Exploration with a Robotic Hand

    We present an investigation of active control for intelligent object exploration using touch with a robotic hand. First, uncertainty from the exploration is reduced by a probabilistic method based on the accumulation of evidence through interaction with an object of interest. Second, an intrinsic motivation approach allows the robot hand to perform intelligent active control of movements to explore interesting locations of the object. Passive and active perception and exploration were implemented in simulated and real environments to compare their benefits in accuracy and reaction time. The proposed method was validated with an object recognition task, using a robotic platform composed of a three-fingered robotic hand and a robot table. The results demonstrate that our method permits the robotic hand to achieve high accuracy for object recognition with low impact on the reaction time required to perform the task. These benefits make our method suitable for perception and exploration in autonomous robotics.

    Towards a wearable interface for immersive telepresence in robotics

    In this paper we present an architecture for the study of telepresence, immersion and human-robot interaction. The architecture is built around a wearable interface that provides the human user with visual, audio and tactile feedback from a remote location. We have chosen to interface the system with the iCub humanoid robot, as it mimics many human sensory modalities, including vision (with gaze control) and tactile feedback, which offers a richly immersive experience for the human user. Our wearable interface allows human participants to observe and explore a remote location, while also being able to communicate verbally with others located in the remote environment. Our approach has been tested over a variety of distances, including between university and business premises, and over wired, wireless and Internet-based connections, using data compression to maintain the quality of the experience for the user. Initial testing has shown the wearable interface to be a robust system for immersive teleoperation, with a myriad of potential applications, particularly in social networking, gaming and entertainment.

    Memory and mental time travel in humans and social robots

    From neuroscience, brain imaging, and the psychology of memory we are beginning to assemble an integrated theory of the brain sub-systems and pathways that allow the compression, storage and reconstruction of memories for past events and their use in contextualizing the present and reasoning about the future—mental time travel (MTT). Using computational models, embedded in humanoid robots, we are seeking to test the sufficiency of this theoretical account and to evaluate the usefulness of brain-inspired memory systems for social robots. In this contribution, we describe the use of machine learning techniques—Gaussian process latent variable models—to build a multimodal memory system for the iCub humanoid robot, and summarise results of the deployment of this system for human-robot interaction. We also outline the further steps required to create a more complete robotic implementation of human-like autobiographical memory and MTT. We propose that generative memory models, such as those that form the core of our robot memory system, can provide a solution to the symbol grounding problem in embodied artificial intelligence.
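    The generative memory idea, compressing multimodal experience into a shared latent space and regenerating episodes from partial cues, can be sketched in a much-simplified form. PCA stands in here for the Gaussian process latent variable model used on the robot, and random vectors stand in for real sensor data; all dimensions and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical multimodal episodes: each row concatenates a "vision"
# feature (4 dims) and an "audio" feature (3 dims).
episodes = rng.normal(size=(20, 7))

# Compress to a 2-D latent space with PCA (a linear stand-in for the
# nonlinear GP-LVM used in the paper).
mean = episodes.mean(axis=0)
_, _, vt = np.linalg.svd(episodes - mean, full_matrices=False)
W = vt[:2]                                # principal axes (2 x 7)
latents = (episodes - mean) @ W.T         # stored compressed memories

def recall(vision_cue):
    """Pattern-complete: project a vision-only cue into the latent space,
    find the nearest stored latent point, and regenerate the full
    episode (vision + audio) from it."""
    cue = np.concatenate([vision_cue, np.zeros(3)])   # missing audio = 0
    z = (cue - mean) @ W.T
    idx = int(np.argmin(np.linalg.norm(latents - z, axis=1)))
    return latents[idx] @ W + mean        # reconstructed episode

recalled = recall(episodes[5, :4])        # cue with one episode's vision part
```

    The point of the sketch is the generative direction: because the latent model can decode back to the full observation space, a cue in one modality can regenerate the other, which is what makes such memories usable for event reconstruction.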

    The synthetic psychology of the self

    Synthetic psychology describes the approach of “understanding through building” applied to the human condition. In this chapter, we consider the specific challenge of synthesizing a robot “sense of self”. Our starting hypothesis is that the human self is brought into being by the activity of a set of transient self-processes instantiated by the brain and body. We propose that we can synthesize a robot self by developing equivalent sub-systems within an integrated biomimetic cognitive architecture for a humanoid robot. We begin the chapter by motivating this work in the context of the criteria for recognizing other minds, and the challenge of benchmarking artificial intelligence against humans, and conclude by describing efforts to create a sense of self for the iCub humanoid robot that has ecological, temporally-extended, interpersonal and narrative components set within a multi-layered model of mind.